# Text Summarization
## mT5 Base CNN Then NorSumm
- **Author:** GloriaABK1 · **Downloads:** 105 · **Likes:** 0
- **Tags:** Text Generation, Transformers

A text summarization model based on the mT5-base architecture, capable of extracting key information from text to generate summaries.
## T5 Small Finetuned XSum
- **Author:** bdwjaya · **License:** Apache-2.0 · **Downloads:** 103 · **Likes:** 0
- **Tags:** Text Generation, Transformers

A text summarization model fine-tuned from T5-small on the XSum dataset.
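Checkpoints like this one are normally driven through the `transformers` summarization pipeline. The sketch below is a minimal illustration, not code from any of the listed repos; the repo id is inferred from the author/model names in this listing and may not match the actual Hub id. T5 models frame every task as text-to-text with a task prefix, which the pipeline handles for you.

```python
def t5_prefix(text: str, task: str = "summarize") -> str:
    """T5 frames every task as text-to-text via a task prefix."""
    return f"{task}: {text}"

def summarize(text: str, model_name: str = "bdwjaya/t5_small_finetuned_xsum") -> str:
    """Summarize `text` with a seq2seq checkpoint via the transformers pipeline.

    The default repo id is a guess from this listing; any T5/BART
    summarization checkpoint works the same way.
    """
    from transformers import pipeline  # lazy import: heavy dependency

    summarizer = pipeline("summarization", model=model_name)
    # For T5-class models the pipeline prepends the "summarize: " prefix itself.
    result = summarizer(text, max_length=60, min_length=10, do_sample=False)
    return result[0]["summary_text"]

if __name__ == "__main__":
    print(summarize("Long article text goes here ..."))
```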
## T5 Small Title Ft
- **Author:** swarup3204 · **License:** Apache-2.0 · **Downloads:** 25 · **Likes:** 0
- **Tags:** Text Generation, Transformers, English

T5-small is the compact version of Google's T5 (Text-to-Text Transfer Transformer) model, suitable for various natural language processing tasks.
## Finetuned T5 Small Abstract Summarizer
- **Author:** JyothiYaga · **Downloads:** 15 · **Likes:** 0
- **Tags:** Text Generation

A model specialized in text summarization tasks, capable of extracting key information from long texts to generate concise summaries.
## Jina Embeddings V3 Gguf
- **Author:** jetuned · **Downloads:** 221 · **Likes:** 0
- **Tags:** Text Generation

This model is a summary generation model capable of automatically producing concise summaries of input text.
## Bart Large CNN Onnx
- **Author:** c2p-cmd · **Downloads:** 13 · **Likes:** 0
- **Tags:** Text Generation, Transformers

BART Large CNN is a pre-trained model based on the BART architecture, specifically designed for text summarization tasks.
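ONNX exports of seq2seq models are typically loaded through Hugging Face Optimum, whose `ORTModelForSeq2SeqLM` mirrors the regular `AutoModelForSeq2SeqLM` API while running inference in ONNX Runtime. A minimal sketch, assuming a repo id inferred from this listing (verify it on the Hub before use):

```python
def load_onnx_summarizer(model_id: str = "c2p-cmd/bart-large-cnn-onnx"):
    """Load an ONNX-exported seq2seq model with Hugging Face Optimum.

    The repo id is an assumption based on this listing. ORTModelForSeq2SeqLM
    accepts the same `generate()` workflow as its PyTorch counterpart.
    """
    from optimum.onnxruntime import ORTModelForSeq2SeqLM  # lazy import
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = ORTModelForSeq2SeqLM.from_pretrained(model_id)
    return tokenizer, model

if __name__ == "__main__":
    tok, model = load_onnx_summarizer()
    inputs = tok("Long article text ...", return_tensors="pt", truncation=True)
    summary_ids = model.generate(**inputs, max_length=60)
    print(tok.decode(summary_ids[0], skip_special_tokens=True))
```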
## GPT2 Summarizer
- **Author:** c2p-cmd · **License:** MIT · **Downloads:** 30 · **Likes:** 0
- **Tags:** Text Generation, Transformers

A text summarization model fine-tuned from the GPT-2 architecture, available for both the PyTorch and CoreML frameworks, suitable for generating concise and accurate text summaries.
## SummLlama3.2 3B GGUF
- **Author:** tensorblock · **Downloads:** 95 · **Likes:** 1
- **Tags:** Large Language Model

SummLlama3.2-3B is a 3B-parameter summary generation model built on the Llama 3.2 architecture, offered in multiple quantized versions to accommodate different hardware requirements.
## Bart Large Cnn
- **Author:** philipp-zettl · **License:** MIT · **Downloads:** 15 · **Likes:** 0
- **Tags:** Text Generation, Transformers

This is an ONNX-optimized version of the facebook/bart-large-cnn model, primarily used for text summarization tasks.
## T5 Small Common Corpus Topic Batch
- **Author:** Pclanglais · **License:** Apache-2.0 · **Downloads:** 21 · **Likes:** 2
- **Tags:** Large Language Model, Transformers

A text processing model fine-tuned from the T5-small architecture, focused on task-specific text generation and transformation.
## T5 Small
- **Author:** Hafis123 · **Downloads:** 30 · **Likes:** 1
- **Tags:** Large Language Model, Transformers

A fine-tuned model based on the T5-small architecture, primarily used for text generation tasks, with average performance on ROUGE metrics.
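ROUGE, mentioned in several of these model cards, scores a candidate summary by n-gram overlap with a reference. A dependency-free sketch of ROUGE-1 F1 (simplified: whitespace tokens, no stemming; real evaluations use a library such as `rouge-score`):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between candidate and reference."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # clipped overlap: each reference token can be matched at most its count
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if not cand or not ref or not overlap:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat", "the cat sat")` gives precision 1.0, recall 2/3, and F1 0.8.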
## Haruhi Dialogue Speaker Extract Qwen18
- **Author:** silk-road · **License:** Apache-2.0 · **Downloads:** 17 · **Likes:** 3
- **Tags:** Text Generation, Transformers, Multilingual

A dialogue extraction model fine-tuned from Qwen-1.8B, capable of batch-extracting summaries and dialogues from novel excerpts.
## Distilbart Cnn 12 6
- **Author:** Xenova · **Downloads:** 218 · **Likes:** 0
- **Tags:** Text Generation, Transformers

DistilBART-CNN-12-6 is a distilled version of the BART model, optimized for text summarization tasks; it is substantially smaller while maintaining strong performance.
## Flan T5 Base Summarization
- **Author:** marianna13 · **Downloads:** 148 · **Likes:** 3
- **Tags:** Text Generation, Transformers, English

A text summarization model based on FLAN-T5, suitable for automatic summarization of English texts.
## Bart Large Cnn
- **Author:** Xenova · **Downloads:** 173 · **Likes:** 8
- **Tags:** Text Generation, Transformers

A large-scale text summarization model based on the BART architecture, optimized for the CNN/DailyMail dataset.
## T5 Small Openvino
- **Author:** echarlaix · **License:** Apache-2.0 · **Downloads:** 3,749 · **Likes:** 4
- **Tags:** Large Language Model, Transformers, Multilingual

OpenVINO IR-format version of the T5-small model, supporting text generation, translation, and other tasks.
## Distilbart Cnn 6 6
- **Author:** sshleifer · **License:** Apache-2.0 · **Downloads:** 48.17k · **Likes:** 31
- **Tags:** Text Generation, English

DistilBART is a distilled version of the BART model, optimized for text summarization tasks, significantly improving inference speed while maintaining high performance.
## Pegasus Large
- **Author:** google · **Downloads:** 43.35k · **Likes:** 103
- **Tags:** Text Generation, English

PEGASUS is an abstractive summarization model from Google Research, pre-trained with a gap-sentence generation objective.
## Distilroberta Base Model Transcript
- **Author:** mahaamami · **License:** Apache-2.0 · **Downloads:** 14 · **Likes:** 0
- **Tags:** Large Language Model, Transformers

A text processing model fine-tuned from distilroberta-base, suitable for general NLP tasks.
## Distilbart Cnn 12 6 Finetuned Weaksup 1000
- **Author:** cammy · **License:** Apache-2.0 · **Downloads:** 79 · **Likes:** 1
- **Tags:** Text Generation, Transformers

A text summarization model fine-tuned from distilbart-cnn-12-6, trained for 1,000 steps on weakly supervised data.
## Bimeanvae Amzn
- **Author:** megagonlabs · **License:** BSD-3-Clause · **Downloads:** 85 · **Likes:** 0
- **Tags:** Text Generation, Transformers, English

BiMeanVAE is a variational autoencoder (VAE) based model, primarily used for text summarization tasks.
## Keybart
- **Author:** bloomberg · **License:** Apache-2.0 · **Downloads:** 737 · **Likes:** 40
- **Tags:** Large Language Model, Transformers

KeyBART is a pre-trained text generation model based on the BART architecture, specifically designed to generate concatenated keyphrase sequences in the CatSeqD format.
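A CatSeqD-style output is a single generated string of keyphrases joined by a separator, so downstream code needs one small parsing step. A sketch, assuming a semicolon separator for illustration (check the model card for the exact token the checkpoint emits):

```python
def parse_keyphrases(generated: str, sep: str = ";") -> list:
    """Split a concatenated keyphrase sequence into a clean list.

    The semicolon separator is an assumption for this sketch; CatSeqD-style
    decoders join keyphrases with whatever delimiter they were trained on.
    """
    return [kp.strip() for kp in generated.split(sep) if kp.strip()]
```

For example, `parse_keyphrases("nlp; keyphrase generation ;bart")` yields `["nlp", "keyphrase generation", "bart"]`.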
## Bart Tl Ng
- **Author:** cristian-popa · **License:** Apache-2.0 · **Downloads:** 189 · **Likes:** 4
- **Tags:** Text Generation, Transformers, English

A weakly supervised topic-label generation model based on BART, which approaches topic labeling as a generation task rather than a selection task.
## Distilbart Xsum 1 1
- **Author:** sshleifer · **License:** Apache-2.0 · **Downloads:** 2,198 · **Likes:** 0
- **Tags:** Text Generation, English

DistilBART is a distilled version of the BART model, optimized for text summarization tasks, significantly reducing model size and inference time while maintaining high performance.
## Kogpt2 Base V2
- **Author:** skt · **Downloads:** 105.25k · **Likes:** 47
- **Tags:** Large Language Model, Korean

KoGPT2 is a Korean GPT-2 model developed by SKT-AI, based on the Transformer architecture, suitable for various Korean text generation tasks.
## T5 Small Finetuned Xsum
- **Author:** Rocketknight1 · **License:** Apache-2.0 · **Downloads:** 38 · **Likes:** 0
- **Tags:** Large Language Model, Transformers

A text summarization model fine-tuned from T5-small on an unknown dataset, producing concise summaries.